
    Neuromorphic vision-based tactile sensor for robotic grasp

    Tactile sensors are developed to mimic the human sense of touch in robotics; the sense of touch is essential for machines to interact with their environment. Several approaches have been studied to obtain rich information from the contact point, both to correct a robot's actions and to acquire further information about the grasped object. Vision-based tactile sensors extract tactile information by observing the contact point between the robot's hand and the environment and applying computer vision algorithms. In this thesis, a novel class of vision-based tactile sensors, the "Neuromorphic Vision-Based Tactile Sensor", is proposed to estimate the contact force and classify materials during a grasp. This approach utilises a neuromorphic vision sensor to capture intensity changes (events) at the contact point. The triggered events represent changes in the contact force at each pixel with microsecond resolution, giving the proposed sensor the high temporal resolution and dynamic range required for high-speed robotic applications. Initially, a general framework is presented to describe the sensor's operation, and the relationship between events and contact force is established. Methods based on Time-Delay Neural Networks (TDNN), Gaussian Processes (GP) and Deep Neural Networks (DNN) are then developed to estimate the contact force and classify object materials from the accumulation of events. The results show a low mean squared error of 0.17 N against a reference force sensor for force estimation using the TDNN, and object materials are classified with 79.12% accuracy, 30% higher than with piezoresistive force sensors. This is followed by an approach that preserves spatio-temporal information during learning: the triggered events are framed (event-frames) within a time window to preserve spatial information, and several types of Long Short-Term Memory (LSTM) networks with convolutional layers are developed to estimate the contact force for objects of different sizes. The results are validated against a force sensor and achieve a mean squared error of less than 0.1 N. Finally, algorithmic augmentation techniques are investigated to improve the networks' accuracy over a wider range of forces. Image-based and time-series augmentation methods are developed to generate artificial training samples, and a novel time-domain approach, Temporal Event Shifting (TES), is proposed to augment events while preserving their spatial information. Validation on real experiments indicates that the time-domain and hybrid augmentation methods significantly improve the networks' accuracy for objects of different sizes.
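
    The event-frame and TES steps can be illustrated with a short sketch. The code below is a minimal, hypothetical illustration of the two ideas the abstract names: accumulating asynchronous events into fixed-window event-frames, and a time-domain augmentation that jitters timestamps while leaving pixel coordinates (the spatial information) untouched. All names, the event layout (t, x, y, polarity) and the window/shift values are assumptions for the example, not the thesis' actual implementation.

```python
import numpy as np

def events_to_frames(events, sensor_hw=(260, 346), window_us=10_000):
    """Accumulate asynchronous events into event-frames over fixed time windows.

    events: (N, 4) array with columns (t_us, x, y, polarity), sorted by time.
    Each frame pixel holds the signed sum of event polarities in its window.
    """
    t0 = events[0, 0]
    n_frames = int((events[-1, 0] - t0) // window_us) + 1
    frames = np.zeros((n_frames, *sensor_hw), dtype=np.float32)
    win = ((events[:, 0] - t0) // window_us).astype(int)
    pol = np.where(events[:, 3] > 0, 1.0, -1.0)
    # Scatter-add each event's polarity into (window, y, x).
    np.add.at(frames, (win, events[:, 2].astype(int), events[:, 1].astype(int)), pol)
    return frames

def temporal_event_shift(events, max_shift_us=2_000, rng=None):
    """Time-domain augmentation in the spirit of TES: shift timestamps while
    keeping (x, y) fixed, so the spatial structure of the events is preserved."""
    rng = rng or np.random.default_rng()
    shifted = events.copy()
    shifted[:, 0] = shifted[:, 0] + rng.integers(-max_shift_us, max_shift_us, len(events))
    return shifted[np.argsort(shifted[:, 0])]
```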

    A novel event-based incipient slip detection using Dynamic Active-Pixel Vision Sensor (DAVIS)

    In this paper, a novel approach is proposed to detect incipient slip from the contact area between a transparent silicone medium and different objects, using a neuromorphic event-based vision sensor (DAVIS). Event-based algorithms are developed to detect incipient slip, slip, stress distribution and object vibration. Thirty-seven experiments were performed on five objects with different sizes, shapes, materials and weights to compare the precision and response time of the proposed approach, which is validated against a high-speed conventional camera (1000 FPS). The results indicate that the sensor detects incipient slippage with an average latency of 44.1 ms in an unstructured environment for various objects. It is worth mentioning that the experiments were conducted in an uncontrolled experimental environment, which introduced high noise levels that significantly affected the results; nevertheless, eleven of the experiments had a detection latency below 10 ms, which shows the capability of this method. The results are very promising and show the sensor's high potential for manipulation applications, especially in dynamic environments.
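
    As a rough illustration of the contact-area idea (not the paper's algorithm, whose details the abstract does not give), the sketch below thresholds per-window event activity to estimate the contact area and flags incipient slip when that area deviates sharply from a short moving baseline. The threshold, baseline length and relative-change values are invented for the example, and it assumes event-frames like those built in the sketch above.

```python
import numpy as np

def contact_area_series(frames, activity_thresh=0.5):
    """Per-frame contact-area estimate: count pixels whose accumulated
    event activity exceeds a threshold (illustrative heuristic)."""
    return (np.abs(frames) > activity_thresh).sum(axis=(1, 2))

def detect_incipient_slip(areas, baseline_len=5, rel_change=0.15):
    """Return the index of the first window whose contact area deviates
    from a short moving baseline by more than rel_change, else None."""
    for i in range(baseline_len, len(areas)):
        baseline = areas[i - baseline_len:i].mean()
        if baseline > 0 and abs(areas[i] - baseline) / baseline > rel_change:
            return i
    return None
```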

    Dynamic-vision-based force measurements using convolutional recurrent neural networks

    In this paper, a novel dynamic Vision-Based Measurement method is proposed to measure contact force independent of object size. A neuromorphic camera (Dynamic Vision Sensor) is utilized to observe intensity changes within the silicone membrane where the object makes contact. Three deep Long Short-Term Memory neural networks combined with convolutional layers are developed and implemented to estimate the contact force from intensity changes over time. Thirty-five experiments are conducted using three objects of different sizes to validate the proposed approach. We demonstrate that the networks with memory gates are robust against variable contact sizes, as the networks learn the object size in the early stage of a grasp. Moreover, spatial and temporal features enable the sensor to estimate the contact force accurately every 10 ms. The results are promising, with a mean squared error of less than 0.1 N for grasping and holding contact forces using the leave-one-out cross-validation method.
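
    The abstract's combination of convolutional layers with LSTM memory can be sketched as follows. This is a minimal, assumed architecture (layer sizes, pooling, and the single-channel input are placeholders, not the paper's three networks): per-frame convolutional features feed an LSTM, which emits one force estimate per 10 ms event-frame.

```python
import torch
import torch.nn as nn

class ConvLSTMForceNet(nn.Module):
    """Illustrative sketch: CNN encoder per event-frame, LSTM over the
    sequence, linear head regressing one contact-force value per frame."""
    def __init__(self, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)), nn.Flatten(),
        )
        self.lstm = nn.LSTM(32 * 8 * 8, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, time, 1, H, W)
        b, t = x.shape[:2]
        feats = self.encoder(x.flatten(0, 1))  # fold time into batch
        out, _ = self.lstm(feats.view(b, t, -1))
        return self.head(out).squeeze(-1)      # (batch, time) force estimates
```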

    Event augmentation for contact force measurements


    Asynchronous Events-based Panoptic Segmentation using Graph Mixer Neural Network

    In the context of robotic grasping, object segmentation encounters several difficulties under dynamic conditions such as real-time operation, occlusion, low lighting, motion blur, and object size variability. In response to these challenges, we propose the Graph Mixer Neural Network, which includes a novel collaborative contextual mixing layer applied to 3D event graphs formed on asynchronous events. The proposed layer is designed to spread spatiotemporal correlation within an event graph across four nearest-neighbor levels in parallel. We evaluate the effectiveness of our proposed method on the Event-based Segmentation Dataset (ESD), which covers five image degradation challenges (occlusion, blur, brightness variation, trajectory variation, and scale variance) as well as segmentation of known and unknown objects. The results show that our proposed approach outperforms state-of-the-art methods in terms of mean intersection over union and pixel accuracy. Code is available at: https://github.com/sanket0707/GNN-Mixer.git
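
    To make the 3D event-graph input concrete, here is a small, assumed sketch of how such a graph can be formed on asynchronous events: each event becomes a node at (x, y, scaled t) and is linked to its k nearest spatiotemporal neighbors. The k-d tree construction, k=4, and the time scaling are illustrative choices, not the authors' published graph construction.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_event_graph(events, k=4, time_scale=1e-3):
    """Build a k-nearest-neighbor graph over events in (x, y, scaled t).

    events: (N, 4) array with columns (t_us, x, y, polarity).
    Returns node coordinates (N, 3) and an edge index of shape (2, N*k).
    """
    nodes = np.stack([events[:, 1], events[:, 2], events[:, 0] * time_scale], axis=1)
    tree = cKDTree(nodes)
    _, nbrs = tree.query(nodes, k=k + 1)  # first neighbor is the node itself
    src = np.repeat(np.arange(len(nodes)), k)
    dst = nbrs[:, 1:].reshape(-1)
    return nodes, np.stack([src, dst])
```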